72 research outputs found
Library abstraction for C/C++ concurrency
When constructing complex concurrent systems, abstraction is vital: programmers should be able to reason about concurrent libraries in terms of abstract specifications that hide the implementation details. Relaxed memory models present substantial challenges in this respect, as libraries need not provide sequentially consistent abstractions: to avoid unnecessary synchronisation, they may allow clients to observe relaxed memory effects, and library specifications must capture these. In this paper, we propose a criterion for sound library abstraction in the new C11 and C++11 concurrency model, generalising the standard sequentially consistent notion of linearizability. We prove that our criterion soundly captures all client-library interactions, both through call and return values, and through the subtle synchronisation effects arising from the memory model. To illustrate our approach, we verify implementations against specifications for the lock-free Treiber stack and a producer-consumer queue. Ours is the first approach to compositional reasoning for concurrent C11/C++11 programs.
Towards rigorously faking bidirectional model transformations
Bidirectional model transformations (bx) are mechanisms for automatically restoring consistency between multiple concurrently modified models. They are, however, challenging to implement; many model transformation languages do not support them at all. In this paper, we propose an approach for automatically obtaining the consistency guarantees of bx without the complexities of a bx language. First, we show how to “fake” true bidirectionality using pairs of unidirectional transformations and inter-model consistency constraints in Epsilon. Then, we propose to automatically verify that these transformations are consistency-preserving, and thus indistinguishable from true bx, by defining translations to graph rewrite rules and nested conditions, and by leveraging recent proof calculi for graph transformation verification.
A Scalable, Correct Time-stamped Stack
Concurrent data structures, such as stacks, queues, and deques, often implicitly enforce a total order over elements in their underlying memory layout. However, much of this order is unnecessary: linearizability only requires that elements are ordered if the insert methods ran in sequence. We propose a new approach which uses timestamping to avoid unnecessary ordering. Pairs of elements can be left unordered if their associated insert operations ran concurrently, with order imposed as necessary at the eventual removal. We realise our approach in a new non-blocking data structure, the TS (timestamped) stack. Using the same approach, we can define corresponding queue and deque data structures. In experiments on x86, the TS stack outperforms and outscales all its competitors; for example, it outperforms the elimination-backoff stack by a factor of two. In our approach, more concurrency translates into less ordering, giving less-contended removal and thus higher performance and scalability. Despite this, the TS stack is linearizable with respect to stack semantics. The weak internal ordering in the TS stack presents a challenge when establishing linearizability: standard techniques such as linearization points work well when there exists a total internal order. We present a new stack theorem, mechanised in Isabelle, which characterises the orderings sufficient to establish stack semantics. By applying our stack theorem, we show that the TS stack is indeed linearizable. Our theorem constitutes a new, generic proof technique for concurrent stacks, and it paves the way for future weakly ordered data structure designs.
Resource-sensitive synchronization inference by abduction
We present an analysis which takes as input a sequential program, augmented with annotations indicating potential parallelization opportunities, together with a sequential proof written in separation logic, and produces a correctly synchronized parallelized program and a proof of that program. Unlike previous work, ours is not an independence analysis; we insert synchronization constructs to preserve relevant dependencies found in the sequential program that may otherwise be violated by a naive translation. Separation logic allows us to parallelize fine-grained patterns of resource usage, moving beyond straightforward points-to analysis. Our analysis works by using the sequential proof to discover dependencies between different parts of the program. It leverages these discovered dependencies to guide the insertion of synchronization primitives into the parallelized program, and to ensure that the resulting parallelized program satisfies the same specification as the original sequential program and exhibits the same sequential behaviour. Our analysis is built using frame inference and abduction, two techniques supported by an increasing number of separation logic tools.
Resource-Bound Quantification for Graph Transformation
Graph transformation has been used to model concurrent systems in software
engineering, as well as in biochemistry and life sciences. The application of a
transformation rule can be characterised algebraically as construction of a
double-pushout (DPO) diagram in the category of graphs. We show how
intuitionistic linear logic can be extended with resource-bound quantification,
allowing for an implicit handling of the DPO conditions, and how resource logic
can be used to reason about graph transformation systems.
Collective emotions online and their influence on community life
E-communities, social groups interacting online, have recently become an
object of interdisciplinary research. As with face-to-face meetings, Internet
exchanges may not only include factual information but also emotional
information - how participants feel about the subject discussed or other group
members. Emotions are known to be important in affecting interaction partners
in offline communication in many ways. Could emotions in Internet exchanges
affect others and systematically influence quantitative and qualitative aspects
of the trajectory of e-communities? The development of automatic sentiment
analysis has made large scale emotion detection and analysis possible using
text messages collected from the web. It is not clear if emotions in
e-communities primarily derive from individual group members' personalities or
if they result from intra-group interactions, and whether they influence group
activities. We show the collective character of affective phenomena on a large
scale as observed in 4 million posts downloaded from Blogs, Digg and BBC
forums. To test whether the emotions of a community member may influence the
emotions of others, posts were grouped into clusters of messages with similar
emotional valences. The frequency of long clusters was much higher than it
would be if emotions occurred at random. Distributions for cluster lengths can
be explained by preferential processes because conditional probabilities for
consecutive messages grow as a power law with cluster length. For BBC forum
threads, average discussion lengths were higher for larger values of absolute
average emotional valence in the first ten comments and the average amount of
emotion in messages fell during discussions. Our results prove that collective
emotional states can be created and modulated via Internet communication and
that emotional expressiveness is the fuel that sustains some e-communities.
Comment: 23 pages including Supporting Information, accepted to PLoS ONE
Developing Successful Breeding Programs for New Zealand Aquaculture: A Perspective on Progress and Future Genomic Opportunities
Over the past 40 years, New Zealand (NZ) aquaculture has grown into a significant primary industry. Tonnage is small on a global scale, but the industry has built an international reputation for the supply of high-quality seafood to many overseas markets. Since the early 1990s, the industry has recognized the potential gains from selective breeding, and the challenge has been to develop programs that can overcome biological obstacles (such as larval rearing and mortality) and operate cost-effectively on a relatively small scale while still providing significant gains in multiple traits of economic value. This paper provides an overview of the current status, and a perspective on genomic technology implementation, for the family-based genetic improvement programs established for the two main species farmed in NZ: Chinook (king) salmon (Oncorhynchus tshawytscha) and the Greenshell™ mussel (Perna canaliculus). These programs have provided significant benefit to the industry, and we are now developing genomic resources based on genotyping-by-sequencing to complement the breeding programs, enable evaluation of genetic diversity, and identify the potential benefits of genomic selection. This represents an opportunity to increase genetic gain and to utilize the potential for within-family selection more effectively.